In this paper, we present a novel control architecture for the online adaptation of bipedal locomotion on inclined obstacles. In particular, we introduce a novel, cost-effective, and versatile foot sensor (bump sensor) that detects the proximity of the robot's feet to the ground. Using this sensor, feedback controllers are implemented to reduce impact forces during the transition from the swing to the stance phase and when stepping on unseen inclined obstacles. Compared to conventional sensors based on contact reaction forces, this sensor measures the distance to the ground or obstacle before the foot makes contact and therefore provides predictive information for anticipating obstacles. The controller driven by the proposed bump sensor interacts with an admittance controller that adjusts the leg length. Walking experiments show successful locomotion on an unseen inclined obstacle with a slope angle of 12 degrees without reducing the locomotion speed. Foot position errors cause hard impacts with the ground as a consequence of accumulated error from the deflection of the links and connections (the robot was manufactured with university tooling). The proposed framework drastically reduces the impact of the feet with the ground.
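A minimal sketch of the general idea, not the paper's controller: a bump-sensor proximity reading is mapped to a virtual force that drives an admittance law adjusting the leg length before touchdown. The gains, threshold, and proximity-to-force mapping below are illustrative assumptions.

```python
# Illustrative admittance adjustment of leg length from a proximity (bump) signal.
class LegAdmittance:
    def __init__(self, m=1.0, b=40.0, k=400.0, dt=0.002):
        self.m, self.b, self.k, self.dt = m, b, k, dt
        self.dz = 0.0   # leg-length offset from the nominal trajectory [m]
        self.vz = 0.0   # offset rate [m/s]

    def update(self, virtual_force):
        # M*az + B*vz + K*dz = F  ->  integrate the leg-length offset
        az = (virtual_force - self.b * self.vz - self.k * self.dz) / self.m
        self.vz += az * self.dt
        self.dz += self.vz * self.dt
        return self.dz

def proximity_to_force(distance, threshold=0.03, gain=200.0):
    # Map the sensed distance to a virtual force that starts shortening the leg
    # before touchdown; zero while the ground is still far away (assumed mapping).
    return gain * max(0.0, threshold - distance)

controller = LegAdmittance()
for distance in [0.10, 0.05, 0.02, 0.01, 0.005]:   # readings as the foot approaches the ground
    leg_offset = controller.update(proximity_to_force(distance))
```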
Designing a local planner to control tractor-trailer vehicles in forward and backward maneuvering is a challenging control problem in the autonomous driving research community. Considering the critical stability issues of tractor-trailer systems, a practical and novel approach is presented for designing a nonlinear MPC (NMPC) local planner for tractor-trailer autonomous vehicles in both forward and backward maneuvering. The tractor velocity and steering angle are taken as the control variables. The proposed NMPC local planner is designed to handle jackknife situations, avoid multiple static obstacles, and follow paths in both forward and backward maneuvering. These challenges are converted into a constrained problem that the proposed NMPC local planner can handle simultaneously. The direct multiple shooting approach is used to convert the optimal control problem (OCP) into a nonlinear programming problem (NLP) that can be solved with the IPOPT solver in CasADi. The controller's performance is evaluated in real time through different backup and forward maneuvering scenarios in the Gazebo simulation environment. It achieves asymptotic stability in avoiding static obstacles and accurate tracking performance while respecting path constraints. Finally, the proposed NMPC local planner is integrated with an open-source autonomous driving software stack called AutowareAi.
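A minimal sketch, not the authors' implementation, of direct multiple shooting for an NMPC of this kind in CasADi with IPOPT: the kinematic tractor-trailer model, horizon length, weights, and the hitch-angle bound used to avoid jackknifing are all assumptions.

```python
import casadi as ca
import numpy as np

N, dt = 20, 0.1                      # horizon length and step size (assumed)
L_tractor, L_trailer = 2.5, 5.0      # wheelbase and hitch-to-axle length (assumed)

# State: tractor pose (x, y, th0) and trailer yaw th1; controls: speed v, steering delta
x = ca.SX.sym('x', 4)
u = ca.SX.sym('u', 2)
xdot = ca.vertcat(
    u[0] * ca.cos(x[2]),
    u[0] * ca.sin(x[2]),
    u[0] / L_tractor * ca.tan(u[1]),
    u[0] / L_trailer * ca.sin(x[2] - x[3]),
)
f = ca.Function('f', [x, u], [xdot])

X = ca.SX.sym('X', 4, N + 1)         # shooting nodes (states)
U = ca.SX.sym('U', 2, N)             # piecewise-constant controls
x0 = np.array([0.0, 0.0, 0.0, 0.0])  # current state (would come from localization)
x_ref = np.array([10.0, 2.0, 0.0, 0.0])

cost, g = 0, [X[:, 0] - x0]
for k in range(N):
    cost += ca.sumsqr(X[:, k] - x_ref) + 0.1 * ca.sumsqr(U[:, k])
    x_next = X[:, k] + dt * f(X[:, k], U[:, k])   # explicit Euler integration
    g.append(X[:, k + 1] - x_next)                # continuity (shooting) constraint
    g.append(X[2, k] - X[3, k])                   # hitch angle, bounded to avoid jackknife

w = ca.vertcat(ca.reshape(X, 4 * (N + 1), 1), ca.reshape(U, 2 * N, 1))
nlp = {'x': w, 'f': cost, 'g': ca.vertcat(*g)}
solver = ca.nlpsol('solver', 'ipopt', nlp)

# Continuity constraints are equalities; hitch-angle constraints are inequalities
lbg = [0.0] * 4 + ([0.0] * 4 + [-1.2]) * N
ubg = [0.0] * 4 + ([0.0] * 4 + [1.2]) * N
sol = solver(x0=np.zeros(w.shape[0]), lbg=lbg, ubg=ubg)
```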
Dialogue models are able to generate coherent and fluent responses, but they can still be challenging to control and may produce non-engaging, unsafe results. This unpredictability diminishes user trust and can hinder the use of the models in the real world. To address this, we introduce DialGuide, a novel framework for controlling dialogue model behavior using natural language rules, or guidelines. These guidelines provide information about the context they are applicable to and what should be included in the response, allowing the models to generate responses that are more closely aligned with the developer's expectations and intent. We evaluate DialGuide on three tasks in open-domain dialogue response generation: guideline selection, response generation, and response entailment verification. Our dataset contains 10,737 positive and 15,467 negative dialogue context-response-guideline triplets across two domains - chit-chat and safety. We provide baseline models for the tasks and benchmark their performance. We also demonstrate that DialGuide is effective in the dialogue safety domain, producing safe and engaging responses that follow developer guidelines.
Graph neural networks (GNNs) have lately been utilized for various natural language processing (NLP) tasks. The ability to encode corpus-wide features in a graph representation has made GNN models popular in tasks such as document classification. One major shortcoming of such models is that they mainly work on homogeneous graphs, while representing text datasets as graphs requires several node types, which leads to a heterogeneous schema. In this paper, we propose a transductive hybrid approach composed of an unsupervised node representation learning model followed by a node classification/edge prediction model. The proposed model is capable of processing heterogeneous graphs to produce unified node embeddings, which are then utilized for node classification or link prediction as the downstream task. The model is developed to classify stock market technical analysis reports, which to our knowledge is the first work in this domain. Experiments carried out on a constructed dataset demonstrate the model's ability in embedding extraction and the downstream tasks.
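A minimal sketch of the two-stage transductive pipeline the abstract describes, with a deliberately simplified stand-in for the learned embedding model: unsupervised node embeddings via truncated SVD of a heterogeneous (document-word) adjacency, followed by classification of the document nodes. The data, node types, and embedding method are placeholders, not the paper's.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_docs, n_words = 100, 500
doc_word = (rng.random((n_docs, n_words)) < 0.05).astype(float)   # bipartite adjacency

# Stack both node types into one adjacency so all nodes share an embedding space
adj = np.block([
    [np.zeros((n_docs, n_docs)), doc_word],
    [doc_word.T, np.zeros((n_words, n_words))],
])
embeddings = TruncatedSVD(n_components=32, random_state=0).fit_transform(adj)

# Downstream task: classify document nodes from their unified embeddings
doc_emb = embeddings[:n_docs]
labels = rng.integers(0, 3, size=n_docs)                           # placeholder report labels
clf = LogisticRegression(max_iter=1000).fit(doc_emb[:80], labels[:80])
accuracy = clf.score(doc_emb[80:], labels[80:])
```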
Recently, there has been significant interest in satellite telemetry anomaly detection (AD) using neural networks (NNs). For AD purposes, current approaches focus on either forecasting or reconstruction of the time series, and they cannot measure the level of reliability or the probability of correct detection. Although Bayesian neural network (BNN)-based approaches are well known for time series uncertainty estimation, they are computationally intractable. In this paper, we present a tractable approximation of BNNs based on the Monte Carlo (MC) dropout method for capturing the uncertainty in satellite telemetry time series, without sacrificing accuracy. For time series forecasting, we employ an NN that consists of several Long Short-Term Memory (LSTM) layers followed by various dense layers. We apply MC dropout inside each LSTM layer and before the dense layers for uncertainty estimation. With the proposed uncertainty region and a post-processing filter, we can effectively capture the anomaly points. Numerical results show that the proposed time series AD approach outperforms existing methods from both the prediction-accuracy and AD perspectives.
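A minimal sketch, under assumed layer sizes and hyperparameters, of MC-dropout uncertainty estimation for LSTM-based telemetry forecasting (not the authors' exact model): dropout stays active at test time, multiple stochastic forward passes yield a predictive mean and spread, and an observation far outside that band is flagged.

```python
import torch
import torch.nn as nn

class MCDropoutForecaster(nn.Module):
    def __init__(self, n_features=1, hidden=64, p=0.2):
        super().__init__()
        # Dropout between stacked LSTM layers and before the dense head
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True, dropout=p)
        self.drop = nn.Dropout(p)
        self.head = nn.Sequential(nn.Linear(hidden, 32), nn.ReLU(), nn.Dropout(p),
                                  nn.Linear(32, n_features))

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(self.drop(out[:, -1]))      # predict the next sample

@torch.no_grad()
def mc_predict(model, x, n_samples=50):
    model.train()                                     # keep dropout active at test time
    preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(0), preds.std(0)                # predictive mean and uncertainty

# Flag an anomaly when the observation falls outside mean +/- 3*std (assumed threshold)
model = MCDropoutForecaster()
window = torch.randn(1, 30, 1)                        # 30 past telemetry samples (placeholder)
mean, std = mc_predict(model, window)
observed = torch.tensor([[0.7]])
is_anomaly = (observed - mean).abs() > 3 * std
```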
In this paper we look into the conjecture of Entezari et al. (2021) which states that if the permutation invariance of neural networks is taken into account, then there is likely no loss barrier to the linear interpolation between SGD solutions. First, we observe that neuron alignment methods alone are insufficient to establish low-barrier linear connectivity between SGD solutions due to a phenomenon we call variance collapse: interpolated deep networks suffer a collapse in the variance of their activations, causing poor performance. Next, we propose REPAIR (REnormalizing Permuted Activations for Interpolation Repair) which mitigates variance collapse by rescaling the preactivations of such interpolated networks. We explore the interaction between our method and the choice of normalization layer, network width, and depth, and demonstrate that using REPAIR on top of neuron alignment methods leads to 60%-100% relative barrier reduction across a wide variety of architecture families and tasks. In particular, we report a 74% barrier reduction for ResNet50 on ImageNet and 90% barrier reduction for ResNet18 on CIFAR10.
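A minimal sketch of the rescaling idea for a single linear layer (illustrative, not the paper's full procedure): after interpolating two permutation-aligned networks, correct the interpolated layer so that its per-neuron preactivation mean and standard deviation on a calibration batch match the interpolation of the endpoints' statistics, counteracting variance collapse.

```python
import torch

def layer_stats(weight, bias, x):
    pre = x @ weight.T + bias                  # preactivations on a calibration batch
    return pre.mean(0), pre.std(0)

def repair_layer(wa, ba, wb, bb, x, alpha=0.5):
    # Naive interpolation of the (already permutation-aligned) parameters
    w = (1 - alpha) * wa + alpha * wb
    b = (1 - alpha) * ba + alpha * bb
    mu_a, sd_a = layer_stats(wa, ba, x)
    mu_b, sd_b = layer_stats(wb, bb, x)
    mu_i, sd_i = layer_stats(w, b, x)
    # Target statistics: interpolate the endpoints' per-neuron mean and std
    mu_t = (1 - alpha) * mu_a + alpha * mu_b
    sd_t = (1 - alpha) * sd_a + alpha * sd_b
    scale = sd_t / (sd_i + 1e-8)
    # Fold the affine correction into the layer: w' x + b' now has the target statistics
    w_repaired = w * scale.unsqueeze(1)
    b_repaired = (b - mu_i) * scale + mu_t
    return w_repaired, b_repaired
```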
Hyperdimensional computing (HDC) is a paradigm for data representation and learning that originated in computational neuroscience. HDC represents data as high-dimensional, low-precision vectors that can be used for a variety of information processing tasks such as learning and recall. The mapping into the high-dimensional space is a fundamental problem in HDC, and existing methods encounter scalability issues when the input data are themselves high-dimensional. In this work, we explore a family of streaming encoding techniques based on hashing. We show formally that these methods offer comparable performance guarantees for learning applications while being substantially more efficient than existing alternatives. We validate these results experimentally on a popular high-dimensional classification problem and show that our approach scales easily to very large datasets.
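A minimal sketch of one assumed variant of hashing-based streaming encoding, not the paper's exact encoder: each (feature index, value) pair is hashed to a few positions and signs of a high-dimensional bipolar hypervector, so the input never has to be densified.

```python
import numpy as np

def hash_encode(feature_stream, dim=10_000, n_hashes=4, seed=0):
    """feature_stream yields (index, value) pairs of a sparse, possibly huge input."""
    hv = np.zeros(dim, dtype=np.float32)
    for idx, val in feature_stream:
        for h in range(n_hashes):
            rng = np.random.default_rng((seed, h, idx))   # deterministic per (hash, feature)
            pos = rng.integers(dim)
            sign = 1.0 if rng.integers(2) else -1.0
            hv[pos] += sign * val
    return np.sign(hv)                                     # low-precision (bipolar) hypervector

# Usage: encode a sparse sample; classification would compare it to class prototypes
sample = [(3, 1.0), (17, 0.5), (98_765, 2.0)]
hv = hash_encode(iter(sample))
```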
In recent years, remarkable progress in deep learning has largely been driven by improvements in scale, where larger models are trained on larger datasets for longer schedules. To predict the benefits of scale empirically, we argue for a more rigorous methodology based on extrapolating the loss, rather than reporting the best-fitting (interpolating) parameters. We then propose a recipe for reliably estimating scaling-law parameters from learning curves. We demonstrate that it extrapolates more accurately than previous methods across a wide range of domains, including image classification, neural machine translation (NMT), and language modeling, in addition to tasks from a large-scale evaluation benchmark. Finally, we release a benchmark dataset consisting of 90 evaluation tasks to facilitate research in this area.
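A minimal sketch, with an assumed functional form and synthetic data, of the distinction the abstract draws: fit scaling-law parameters on the small-scale portion of a learning curve and judge the fit by its extrapolation error at larger scales rather than by the quality of the interpolating fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, b, c):
    # L(n) = a * n^(-b) + c : loss versus training-set size (assumed form)
    return a * np.power(n, -b) + c

# Synthetic learning curve: small-scale points for fitting, larger scales held out
n_all = np.array([1e4, 3e4, 1e5, 3e5, 1e6, 3e6, 1e7])
loss_all = power_law(n_all, a=5.0, b=0.3, c=0.8) \
    + np.random.default_rng(0).normal(0, 0.01, n_all.size)

fit_mask = n_all <= 1e6                             # fit only on the smaller scales
params, _ = curve_fit(power_law, n_all[fit_mask], loss_all[fit_mask],
                      p0=[1.0, 0.5, 0.5], maxfev=10_000)

# Evaluate by extrapolation error at the held-out largest scales
extrapolation_err = np.abs(power_law(n_all[~fit_mask], *params) - loss_all[~fit_mask])
```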
Recent AI algorithms are black-box models whose decisions are difficult to interpret. Explainable AI (XAI) seeks to address this lack of explainability of and trust in AI by explaining AI decisions to customers, such as a decision to reject a loan application. The conventional wisdom is that regulating AI by mandating fully transparent XAI leads to greater social welfare. This paper challenges this notion through a game-theoretic model of a welfare-maximizing policy maker, a profit-maximizing duopoly, and heterogeneous consumers. The results show that XAI regulation may be redundant. In fact, mandating fully transparent XAI may make firms and customers worse off. This reveals a trade-off between maximizing welfare and obtaining explainable AI outputs. We also discuss managerial implications for policy makers and firms.
In the past, graph representations of real-world social networks have missed two important elements: the multiplicity of connections and the representation of time. To this end, in this paper we present a new dynamic heterogeneous graph representation for social networks that includes time in every component of the graph, i.e., nodes and edges, each of different types to capture the heterogeneity. We illustrate the power of this representation by proposing four time-dependent queries and deep learning problems that cannot easily be handled in a conventional homogeneous graph representation. As a proof of concept, we present a detailed representation of a new social media platform (Steemit), which we use to illustrate the dynamic querying capability as well as prediction tasks using graph neural networks (GNNs). The results illustrate the power of the dynamic heterogeneous graph representation for modeling social networks. Given that this is a relatively understudied area, we also discuss opportunities for future work on query optimization as well as on new dynamic prediction tasks over heterogeneous graph structures.
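A minimal sketch of an assumed schema, not the paper's exact one, for a dynamic heterogeneous graph: typed, timestamped nodes and multi-edges, plus a simple time-windowed query of the kind the abstract describes.

```python
import networkx as nx

g = nx.MultiDiGraph()
g.add_node("alice", node_type="user", created_at=1_000)
g.add_node("post42", node_type="post", created_at=1_050)
# Multiple edges of different types can connect the same pair of nodes
g.add_edge("alice", "post42", edge_type="authored", timestamp=1_050)
g.add_edge("alice", "post42", edge_type="upvoted", timestamp=1_300)

def edges_in_window(graph, edge_type, t_start, t_end):
    """Return edges of a given type whose timestamp falls within [t_start, t_end]."""
    return [
        (u, v, d) for u, v, d in graph.edges(data=True)
        if d["edge_type"] == edge_type and t_start <= d["timestamp"] <= t_end
    ]

recent_upvotes = edges_in_window(g, "upvoted", 1_200, 1_400)
```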